Development of an online database for the evaluation and assessment of residue findings in organic inspection
Residue analysis of organic products is a sensitive area. When contamination is found in organic goods, the assessment raises the question of whether insufficient preventive measures, a direct and impermissible application, or commingling with conventional goods could be the cause of the residue finding.
Especially for analysis results in the low-level range, organic control bodies, laboratories, and the competent authorities often face difficult interpretation tasks. The bilingual online database resi.bio provides a way to record and discuss case descriptions and their assessments in anonymized form. This will make it easier in the future to interpret similar findings and enables a harmonization of assessments. In addition, the database provides a basis for risk-oriented targeting of sampling and analysis. The database is not publicly accessible; its target groups are the organic control bodies and laboratories that support the project with data, as well as the competent authorities.
A Systematic Approach to Constructing Incremental Topology Control Algorithms Using Graph Transformation
Communication networks form the backbone of our society. Topology control
algorithms optimize the topology of such communication networks. Due to the
importance of communication networks, a topology control algorithm should
guarantee certain required consistency properties (e.g., connectivity of the
topology), while achieving desired optimization properties (e.g., a bounded
number of neighbors). Real-world topologies are dynamic (e.g., because nodes
join, leave, or move within the network), which requires topology control
algorithms to operate in an incremental way, i.e., based on the recently
introduced modifications of a topology. Visual programming and specification
languages are a proven means for specifying the structure as well as
consistency and optimization properties of topologies. In this paper, we
present a novel methodology, based on a visual graph transformation and graph
constraint language, for developing incremental topology control algorithms
that are guaranteed to fulfill a set of specified consistency and optimization
constraints. More specifically, we model the possible modifications of a
topology control algorithm and the environment using graph transformation
rules, and we describe consistency and optimization properties using graph
constraints. On this basis, we apply and extend a well-known constructive
approach to derive refined graph transformation rules that preserve these graph
constraints. We apply our methodology to re-engineer an established topology
control algorithm, kTC, and evaluate it in a network simulation study to show
the practical applicability of our approach. Comment: This document corresponds to the accepted manuscript of the
referenced journal article.
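The abstract describes refining topology control rules so that removing a link never violates the specified graph constraints. As a purely illustrative sketch of the underlying kTC idea, the following assumes the commonly cited kTC criterion (drop the longest edge of a triangle when it is at least k times as long as that triangle's shortest edge); this is a batch version, whereas the paper derives incremental rule applications via graph transformation:

```python
from itertools import combinations

def ktc(nodes, edges, k):
    """Batch sketch of the kTC rule: mark an edge inactive if it is the
    longest edge of some triangle and at least k times as long as that
    triangle's shortest edge. `edges` maps (u, v) pairs to weights."""
    # Symmetrize the weight map so triangles can be checked in any order.
    w = {}
    for (u, v), d in edges.items():
        w[(u, v)] = d
        w[(v, u)] = d
    inactive = set()
    for u, v, x in combinations(nodes, 3):
        tri = [(u, v), (v, x), (u, x)]
        if not all(e in w for e in tri):
            continue  # the three nodes do not form a triangle
        longest = max(tri, key=lambda e: w[e])
        shortest = min(tri, key=lambda e: w[e])
        if w[longest] >= k * w[shortest]:
            inactive.add(tuple(sorted(longest)))
    return inactive

edges = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 3.0}
removed = ktc(["a", "b", "c"], edges, 2)  # the long edge ("a", "c")
```

Inactivating only such "redundant" long edges is what lets a constraint like connectivity be preserved: the two shorter triangle edges keep the endpoints connected.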
A Systematic Approach to Constructing Families of Incremental Topology Control Algorithms Using Graph Transformation
In the communication systems domain, constructing and maintaining network
topologies via topology control (TC) algorithms is an important cross-cutting
research area. Network topologies are usually modeled using attributed graphs
whose nodes and edges represent the network nodes and their interconnecting
links. A key requirement of TC algorithms is to fulfill certain consistency and
optimization properties to ensure a high quality of service. Still, few
attempts have been made to constructively integrate these properties into the
development process of TC algorithms. Furthermore, even though many TC
algorithms share substantial parts (such as structural patterns or tie-breaking
strategies), few works constructively leverage these commonalities and
differences of TC algorithms systematically. In previous work, we addressed the
constructive integration of consistency properties into the development
process. We outlined a constructive, model-driven methodology for designing
individual TC algorithms. Valid and high-quality topologies are characterized
using declarative graph constraints; TC algorithms are specified using
programmed graph transformation. We applied a well-known static analysis
technique to refine a given TC algorithm in a way that the resulting algorithm
preserves the specified graph constraints.
In this paper, we extend our constructive methodology by generalizing it to
support the specification of families of TC algorithms. To show the feasibility
of our approach, we re-engineer six existing TC algorithms and develop e-kTC, a
novel energy-efficient variant of the TC algorithm kTC. Finally, we evaluate a
subset of the specified TC algorithms using a new tool integration of the graph
transformation tool eMoflon and the Simonstrator network simulation framework. Comment: Corresponds to the accepted manuscript.
Applied image recognition: guidelines for using deep learning models in practice
In recent years, novel deep learning techniques, greater data availability, and a significant growth in computing power have enabled AI researchers to tackle problems that had remained unassailable for many years. Furthermore, the advent of comprehensive AI frameworks offers the unique opportunity for adopting these new tools in applied fields. Information systems research can play a vital role in bridging the gap to practice. To this end, we conceptualize guidelines for applied image recognition spanning task definition, neural net configuration, and training procedures. We showcase our guidelines by means of a biomedical research project for image recognition.
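The three guideline stages named in the abstract (task definition, model configuration, training procedure) can be illustrated with a deliberately minimal sketch. The toy data, the label rule, and the linear model below are all hypothetical stand-ins; a real project would substitute a deep network and an AI framework of the kind the paper discusses:

```python
import numpy as np

# 1. Task definition: binary classification of flattened 8x8 "images"
#    (synthetic data; the label rule is invented for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = (X[:, :32].sum(axis=1) > 0).astype(float)

# 2. Model configuration: a linear model standing in for a neural net,
#    plus the hyperparameters that would be fixed at this stage.
w = np.zeros(64)
lr, epochs = 0.1, 50

# 3. Training procedure: gradient descent on the logistic loss.
for _ in range(epochs):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)       # average gradient step

accuracy = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean()
```

Separating the stages this way mirrors the guidelines' structure: each stage produces artifacts (labels, hyperparameters, a fitted model) that the next stage consumes.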
A keyquery-based classification system for CORE
We apply keyquery-based taxonomy composition to compute a classification system for the CORE dataset, a shared crawl of about 850,000 scientific papers. Keyquery-based taxonomy composition can be understood as a two-phase hierarchical document clustering technique that utilizes search queries as cluster labels: In a first phase, the document collection is indexed by a reference search engine, and the documents are tagged with the search queries for which they are relevant, their so-called keyqueries. In a second phase, a hierarchical clustering is formed from the keyqueries within an iterative process. We use the explicit topic model ESA as document retrieval model in order to index the CORE dataset in the reference search engine. Under the ESA retrieval model, documents are represented as vectors of similarities to Wikipedia articles, a methodology proven to be advantageous for text categorization tasks. Our paper presents the generated taxonomy and reports on quantitative properties such as document coverage and processing requirements.
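The two phases described above can be sketched with a toy term-overlap "reference search engine" standing in for the ESA retrieval model; the documents, queries, and cutoff below are invented for illustration, and only flat labeled clusters are formed where the paper builds a full hierarchy:

```python
# Phase 1: keyquery tagging. A query is a keyquery of a document if the
# document appears in the query's top-k results of a reference engine
# (here: a toy engine ranking documents by term overlap).
docs = {
    "d1": {"neural", "network", "training"},
    "d2": {"neural", "network", "pruning"},
    "d3": {"graph", "clustering", "training"},
}
queries = [frozenset({"neural", "network"}), frozenset({"graph"}),
           frozenset({"training"})]
TOP_K = 2

def search(query, docs):
    """Rank documents by term overlap with the query; drop non-matches."""
    scored = sorted(docs, key=lambda d: len(query & docs[d]), reverse=True)
    return [d for d in scored if query & docs[d]][:TOP_K]

keyqueries = {d: {q for q in queries if d in search(q, docs)} for d in docs}

# Phase 2 (flat sketch): documents sharing a keyquery form a cluster,
# and the query itself serves as the human-readable cluster label.
clusters = {q: {d for d in docs if q in keyqueries[d]} for q in queries}
```

The appeal of the approach is visible even in this sketch: each cluster comes with a query as a self-explanatory label, rather than an opaque centroid.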